Results 1 - 3 of 3
1.
Sci Data ; 11(1): 410, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649693

ABSTRACT

Uterine myomas are the most common pelvic tumors in women and can lead to abnormal uterine bleeding, abdominal pain, pelvic compression symptoms, infertility, or adverse pregnancy outcomes. In this article, we provide the uterine myoma MRI dataset (UMD), which can be used for clinical research on uterine myoma imaging. The UMD is the largest publicly available uterine MRI dataset to date, comprising sagittal T2-weighted imaging (T2WI) from 300 uterine myoma patients together with the corresponding annotation files. The UMD covers the 9 types of uterine myoma defined by the International Federation of Gynecology and Obstetrics (FIGO) classification; the annotations were created and reviewed by 11 experienced doctors to ensure their reliability. The UMD supports uterine myoma classification and uterine 3D reconstruction tasks, both of which have important implications for clinical research on uterine myomas.


Subject(s)
Leiomyoma , Magnetic Resonance Imaging , Uterine Neoplasms , Female , Humans , Uterine Neoplasms/diagnostic imaging , Leiomyoma/diagnostic imaging , Uterus/diagnostic imaging
2.
Phys Med Biol ; 69(4)2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38198729

ABSTRACT

Medical image segmentation algorithms based on deep learning have achieved good results in recent years, but they require large amounts of labeled data. Pixel-level labeling of a medical image can require marking tens or even hundreds of points along each target's edge, which costs considerable time and labor. To reduce this labeling cost, we use a click-based interactive segmentation method to generate high-quality segmentation labels. Current interactive segmentation algorithms, however, fuse the user's click information with the image features only at the input of the backbone network (so-called early fusion), at which point the interaction signal is still very sparse. Furthermore, these algorithms do not account for object boundaries, which degrades model performance. We therefore propose a combined early- and late-fusion strategy that prevents the interaction information from being diluted prematurely and exploits it more fully. We also propose a decoupled head structure that extracts image boundary information and, together with a boundary loss function, establishes a boundary constraint term, so that the network pays more attention to boundaries and performance improves further. Finally, we conduct experiments on three medical datasets (CHAOS, VerSe, and Uterine Myoma MRI) to verify the effectiveness of our network. The results show substantial gains over the baseline: NoC@80 (the number of interactive clicks needed to exceed an IoU threshold of 80%) improves by 0.1, 0.1, and 0.2, respectively, and we achieve a NoC@80 score of 1.69 on CHAOS. By our measurements, manual annotation takes 25 minutes per case (Uterine Myoma MRI), whereas annotating a medical image with our method requires only 2 or 3 clicks, saving more than 50% of the cost.
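The NoC@80 metric quoted above can be sketched in a few lines; this is a minimal illustration assuming boolean NumPy masks and one predicted mask per simulated click (the function names and the evaluation harness are ours, not the paper's):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 0.0

def noc_at_threshold(masks_per_click, gt, threshold=0.80, max_clicks=20):
    """Number of clicks until the predicted mask reaches the IoU threshold.

    masks_per_click: iterable of predicted masks, one per simulated click.
    Returns max_clicks if the threshold is never reached (the usual convention).
    """
    for n, pred in enumerate(masks_per_click, start=1):
        if iou(pred, gt) >= threshold:
            return n
    return max_clicks
```

Averaging `noc_at_threshold` over a test set gives the NoC@80 numbers reported above; a lower value means fewer clicks per image.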


Subject(s)
Deep Learning , Myoma , Humans , Algorithms , Time , Image Processing, Computer-Assisted/methods
3.
Diagnostics (Basel) ; 13(9)2023 Apr 24.
Article in English | MEDLINE | ID: mdl-37174917

ABSTRACT

Uterine myomas affect 70% of women of reproductive age, potentially impacting their fertility and health. Manual film reading is commonly used to identify uterine myomas, but it is time-consuming, laborious, and subjective. Clinical treatment requires considering the positional relationship among the uterine wall, uterine cavity, and uterine myomas. However, their complex and variable shapes, low contrast with adjacent tissues or organs, and indistinct edges make them difficult to identify accurately in MRI. Our work addresses these challenges with an instance segmentation network that automatically outputs the location, category, and mask of each organ and lesion. Specifically, we designed a new backbone that facilitates learning shape features across diverse objects and filters out background noise. We optimized the anchor box generation strategy to provide better priors for bounding box prediction and regression. An adaptive iterative subdivision strategy makes the mask boundary details of objects more realistic and accurate. Extensive experiments validate our network, which achieved better average precision (AP) than state-of-the-art instance segmentation models. Compared to the baseline network, our model improved AP on the uterine wall, uterine cavity, and myomas by 8.8%, 8.4%, and 3.2%, respectively. Our work is the first to realize multiclass instance segmentation in uterine MRI, providing a convenient and objective reference for the clinical development of appropriate surgical plans, and has significant value in improving diagnostic efficiency and realizing the automatic auxiliary diagnosis of uterine myomas.
